perm filename TRACES[DIS,DBL]2 blob sn#213805 filedate 1976-05-04 generic text, type C, neo UTF8
.ASEC(Traces of AM in Action)

There are three types of traces presented in this appendix.
First comes a high-level prose description, a running commentary on
AM as it goes through a long, successful run.

.ONCE TURN ON "{}"

The next section presents a few "cleaned-up" traces of the system's output,
much in the style of the example in chapter {[2]EXAM1}.

Finally, a couple of pages of traces are presented in totally undoctored
form, so the reader can see that (i) it ⊗4is⊗* harder to follow than the
slightly rephrased ones, and yet (ii) those earlier, "doctored" traces didn't
enhance (or alter the spirit of) what AM printed out.

.ASSEC(Prose Traces)

In this section are sketched all the paths which AM explored during its runs.

The  three subsections represent the major thrusts of AM's research:
structures, numbers, and geometry.
These subsections list all the paths  which AM followed, explain why,
and  indicate where  they led.    Along the  way, some  concepts were
created which  were interesting  to  ⊗4us⊗* (in  the smug  wisdom  of
millennia of hindsight) but which AM never bothered to develop. These
will be noted, and an attempt will be made to apologize for AM$$ The
typical excuse is that AM was  distracted at that moment by  some even
more interesting task. $. A few exciting moments occurred when AM became
interested in a concept which had been ignored by humans. One instance
of this led to an unusual characterization of numbers with an abnormally
large number of divisors; another time, AM found an "application" of
Goldbach's conjecture. In other such unexpected fixations, the concept
has still not proven to be anything other than "cute" (e.g., triangles
are related by R iff they share a common side, and they are similar).

. ASSSEC(Structures)

This section deals with AM's exploration of structures and structural
operations.    After  it was  started,  with  the  base of  knowledge
outlined in the previous  chapter, the main  activity AM did for  the
first several  minutes was to  fill in  examples of various  kinds of
structures  (e.g., Sets),  structural operations  (e.g., Insert), and
create new concepts of this type (e.g., Singleton).   In more detail,
here is what was done:


Trying to fill in examples of set-operations, AM kept failing because
there were no known examples of Sets to "run" those operations on. So
it turned to filling in examples of Sets. Some of these came from the
recursive definition of a  set: "S is a set if S={} or if both (i) we
can  pull  an  element  y  out  of  S  using  Some-member,  and  (ii)
Set-delete(y,S) is a set". A heuristic knows to extract the base case
from such a definition, yielding {} as one example of a set.  Another
heuristic says to run operations whose range is Sets. One of these is
Set-insert. So a procedure for getting a new set is to take the given
set S, and anything  y, and run Set-insert(y,S).  When this is  done,
using S={}, a bunch of singletons are created. AM literally collected
Examples(Anything)  and randomly chose  members y from  that big set.
Other set-operations are run just to provide some additional examples
of Sets.   Not every attempt is successful, of  course: one heuristic
says  to find some examples of Lists, and  then use the View facet of
Sets to transform them into sets. Unfortunately, there are no examples
of any other kinds of structures at the moment, so this rule fails.
After about 20 cpu seconds, the  time and space quanta are both  just
about exhausted.  Roughly 30 examples of sets were found.
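The loop just described can be sketched in modern Python (AM itself ran in LISP; the names ⊗6set_insert⊗* and ⊗6anything_pool⊗* below are hypothetical stand-ins for AM's Set-insert and Examples(Anything)):

```python
import random
random.seed(4)  # arbitrary seed, for reproducibility of this sketch

# Heuristic: "run operations whose range is Sets."  Starting from the base
# case {} extracted from the recursive definition, repeatedly run Set-insert
# on a known set and a randomly chosen y.

def set_insert(y, s):
    """Insert y into the set s, returning a new (frozen) set."""
    return frozenset(s | {y})

known_sets = [frozenset()]              # base case: {} is a set
anything_pool = ["a", "b", "c", 1, 2]   # stand-in for Examples(Anything)

for _ in range(10):
    s = random.choice(known_sets)
    y = random.choice(anything_pool)
    known_sets.append(set_insert(y, s))

# The very first insertion necessarily acts on {}, so the early rounds
# produce a bunch of singletons, just as in the trace above.
```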

In similar ways, examples are found for Bags, Lists, Osets, and Ordered-pairs.
Examples of structural operations are found "incidentally" as above -- to aid
in producing examples of a certain kind of structure. Occasionally, the primary
task (the one selected from the agenda) will be to find examples of a given
operation. In that case, AM spends a great deal of time just on that one operation.
For example, consider Set-union. When AM got around to filling in some examples for
it, many examples already existed under the concepts of Sets, Bags, and
Bag-union. One way to get examples of Set-union was by analogy to Bag-union.
This caused some slightly erroneous entries to be found
(e.g., {a,b,c}∪{a,c,e}={a,a,b,c,c,e}). Such errors were soon caught and corrected
when the task ⊗6"Check examples of Set-union"⊗* was chosen from the agenda.
Similar errors and corrections occurred when AM viewed lists as if they were osets,
in order to find examples of osets.
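The faulty analogy can be illustrated as follows (a modern Python sketch, not AM's code; Python's Counter stands in for AM's Bags):

```python
from collections import Counter

# Computing Set-union "the Bag-union way" keeps duplicate elements --
# exactly the kind of slightly erroneous entry that the later task
# "Check examples of Set-union" catches and corrects.

def bag_union(b1, b2):
    return b1 + b2                 # bags keep multiplicities

def set_union_by_analogy(s1, s2):
    # erroneous: treats the two sets as bags
    return sorted(bag_union(Counter(s1), Counter(s2)).elements())

def set_union_checked(s1, s2):
    return sorted(set(s1) | set(s2))

wrong = set_union_by_analogy({"a", "b", "c"}, {"a", "c", "e"})
right = set_union_checked({"a", "b", "c"}, {"a", "c", "e"})
# wrong is ['a','a','b','c','c','e'] -- the {a,a,b,c,c,e} of the trace;
# right is ['a','b','c','e'].
```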

The simple development described above was broken frequently by some new
concept being created and found to be very interesting. In fact, AM alone
never finished the above process, because of these interruptions. The user
must interrupt and tell it to ignore new concepts, if he really wants AM to
finish finding examples of all those structures and operations. 

What kinds of
concepts were created, and why? What did AM do to investigate them? 
In general, what happened was this: As AM collected examples of a concept C,
the characteristics of its efforts (how easy/hard to find examples, how
varied they were, etc.) caused certain heuristic rules to trigger. Those
rules then caused some new concepts to be created, for a particular reason.
AM would find examples of them, then compare its results to already-known
concepts.

The first instance of this process was when AM found many examples of sets
so easily. A rule said that in such cases, it was worthwhile specializing
the concept Sets. That is, make a new concept which would have fewer
examples. One way AM did this was to look over the Interest features of all
generalizations of Sets, pluck a couple of them, and conjoin them onto the
definition of Sets, thereby getting a definition for a brand new concept,
called interesting-sets or Isets for short.
The feature selected first was the following: each pair of elements of the
structure satisfy the same rare predicate P, for some P. This was previously
tacked onto the Interest facet of Structures.
Initially, there were very few predicates known: Constantly-T, Constantly-NIL,
Structure-Equality. 
The following Isets were therefore eventually found:
{T} (all T's), {NIL} (all NIL's), {a} (all pairs are Equal to each other),
{T} (all pairs Equal), {} (all T's, all NIL's, all pairs Equal).
This immediately catapulted the empty set to stardom. It also showed
that all such Isets were of the form: sets all pairs of whose elements are Equal.
This became a specialization of Isets, called Equality-sets or Esets for short.
Since the empty set was already distinguished, it was decided not to include
it as a valid Eset. So Esets were precisely the sets we would call Singletons.
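The Interest feature at work here can be sketched in modern Python (only Structure-Equality is shown, not Constantly-T or Constantly-NIL, and the candidate pool is a hypothetical stand-in):

```python
from itertools import combinations

# A structure is "interesting" if every pair of its elements satisfies the
# same rare predicate P -- the feature conjoined onto Sets to form Isets.

def all_pairs_satisfy(s, p):
    return all(p(x, y) for x, y in combinations(s, 2))

def structure_equality(x, y):
    return x == y

candidates = [frozenset(), frozenset({"a"}), frozenset({True}),
              frozenset({"a", "b"}), frozenset({1, 2, 3})]

# "Esets": sets all pairs of whose elements are Equal, with the empty set
# excluded by fiat.  The survivors are precisely the Singletons.
esets = [s for s in candidates
         if all_pairs_satisfy(s, structure_equality) and len(s) > 0]
```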

Unfortunately, the set-operation Union, when applied to Singletons, didn't
always yield singletons. By isolating the kind of sets they ⊗4did⊗* yield,
and not counting the few extreme cases when they yielded singletons, AM
discovered the concept of Doubleton: a set with precisely two members.
Typically, the union of Singletons was a doubleton, their intersection
was the empty set, their Set-difference was a singleton, inserting anything
into a singleton was a doubleton, removing something left a singleton, etc.
The exceptions were all related to when the arguments were equal.

Several more structural concepts were created and explored in this way,
besides Singleton:
Doubleton, Tripleton, Function (an operation, all of whose  values are
singleton sets: i.e., a single-valued operation),...
In general, these occurred because the initial primitive concepts were
too general, too easy to satisfy.

When AM looked for examples satisfying the predicate Structure-Equality,
abbreviated Equa, the situation was just the opposite. A heuristic rule
attached to Operation indicated that examples could be found by randomly
instantiating the two arguments of Equa with a pair of structures. There
were about 100 known examples of structures. AM gathered them into one set, and
then repeatedly pulled a pair of them out and ran Equa on them.
Thus only about 1% of the time did it succeed (did Equa return the value T).
Another heuristic triggered, and said that in such cases, it was worthwhile
trying to generalize the predicate Equa. A new task to this effect was added to the
agenda.
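The experiment that triggered the rule can be sketched as follows (a modern Python illustration; the structure pool is a hypothetical stand-in for AM's hundred known structures):

```python
import random
random.seed(1976)  # arbitrary seed, for reproducibility of this sketch

# Draw random pairs from the pool of known structures and see how rarely
# Structure-Equality (Equa) returns True.

structures = [tuple(random.choice("abcdef") for _ in range(random.randint(1, 4)))
              for _ in range(100)]

trials = 1000
hits = sum(random.choice(structures) == random.choice(structures)
           for _ in range(trials))
success_rate = hits / trials
# Equa succeeds only a few percent of the time -- rare enough to fire the
# heuristic "if a predicate almost never succeeds, try generalizing it."
```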

Soon, AM selected this task, and tried to create new concepts which were 
generalizations of Equa. One definition of Equa was a transparent recursive one,
which essentially said that x and y were Equa iff their Cars and their Cdrs were,
plus it had a base step that asked if both arguments were empty
(in which case Equa returned True), or if precisely one argument was empty
(in which case Equa returned False), or if both arguments were identical
atoms (True), or if they were nonidentical atoms or only one was an atom (False).

.B0 INDENT 0

⊗1λ⊗* (x,y)
  IF x and y are identical atoms, THEN return True;		    ⊃
  ELSE IF either x or y is not a list, THEN return False;	    εαα⊗4Base Cases⊗*
       ELSE IF both x and y are Null lists, THEN return True;	    ~
	    ELSE IF only one of x or y is Null, THEN return False;  $
	     	 ELSE both of these must be true:
			Equa( CAR(x), CAR(y) )
			Equa( CDR(x), CDR(y) )
.E

One heuristic rule that AM possessed suggested that this could be generalized
in a small way by weakening the base step. A couple new concepts were created
this way, but they all turned out to be trivial: either constantly returning
T, or no different than Equa was.

Another heuristic suggested weakening the recursive step. One way to do this,
since the recursive step is a conjunction, is to remove one of the conjuncts.
The rule checks to ensure that the definition is still recursive: one of the remaining
conjuncts must call on the function itself.
In this case, both conjuncts call on Equa, so AM can remove either one. Two new
concepts are generated in this manner. For example, here is one of them, which
AM named "⊗8Equ0⊗*":


.B0 INDENT 0

⊗1λ⊗* (x,y)
  IF x and y are identical atoms, THEN return True;		    ⊃
  ELSE IF either x or y is not a list, THEN return False;	    εαα⊗4Base Cases⊗*
       ELSE IF both x and y are Null lists, THEN return True;	    ~
	    ELSE IF only one of x or y is Null, THEN return False;  $
	     	 ELSE Equ0( CDR(x), CDR(y) )
.E

Note that the base cases are unchanged; the recursive step is a recursion in
the CDR direction, but no longer in the CAR direction. A note to that effect
is placed inside the definition of Equ0 itself, as a comment.
Other parts of Equ0 can be filled in easily: it is a generalization of Equa,
it is an example of a Predicate, its domain/range is the same as Equa,
its worth is initially set a little higher than Equa's, etc.

This predicate maps down two lists, one element at a time, and returns True
iff they both become empty at the same moment. That is, iff they have the
same length. In fact, we can assume that the user interrupts and orders AM
to rename Equ0 as "Same-length".
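Equ0 can be transcribed into modern Python (a sketch; AM's actual definition was a LISP lambda like the one displayed above):

```python
# Recursion survives only in the CDR direction, so the CARs of the two
# lists are never examined.

def equ0(x, y):
    if not isinstance(x, list) and not isinstance(y, list):
        return x == y                  # base case: identical atoms
    if not isinstance(x, list) or not isinstance(y, list):
        return False                   # atom vs. list
    if x == [] and y == []:
        return True                    # both Null
    if x == [] or y == []:
        return False                   # only one Null
    return equ0(x[1:], y[1:])          # ELSE Equ0( CDR(x), CDR(y) )

# True exactly when the two top-level lists have the same length --
# hence the rename to "Same-length".
```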

The other new generalization, Equ1, examines the CAR's (i.e., the first elements)
of a pair of lists, and returns True if they are identical atoms; if they
are both lists, it recurses on those two lists.
A human LISP programmer's interpretation is as follows: when the two lists
are written out in standard notation (using parentheses), all the initial
left parentheses match, and the first non-left-parenthesis entity of each list
also matches.
This is not a particularly common or interesting operation, and in fact AM
gradually loses interest in it: its Worth slowly declines, and very few
tasks involving it bubble up to the top of the agenda.

This concept of Same-length will form the springboard for the development of cardinality,
a tale which is related in the next subsection.
Before moving on, let's mention a couple more set-theoretic activities that AM
did.

Several moderately interesting compositions and coalescings were done to set-operations.
(Actually, to structure-operations). First let's discuss the coalescings of
operations f(x,y) into new operations f(x,x) -- where a pair of arguments is
made to coincide.
Coalescing Insert (insert S into itself) 
led to an operation which always seems to give
a bigger set than it starts with. This led AM to the finitely-true
conjecture that a set is never a member of itself.
The conjecture was confirmed when the coalescing of Delete seemed to produce
the identity operation (Deleting S from S never seemed to change the value of S).
Coalescing Intersect also gave the identity operation; this showed that S∩S=S
(apparently). Similarly for Union.
Coalescing "Compose" gave a new operation of some value: the notion of composing
f with f. When this was applied to, e.g., Intersect, it created the new
operation Intersect-o-Intersect, which took 3 arguments and formed their
common intersection. By forming this in two ways
-- (x∩y)∩z and also x∩(y∩z) -- AM noticed that they were the same, that
the order didn't matter. Since x∩x had already been shown to be the identity
operation, multiple occurrences of an element in an intersection don't make any
difference. Thus ∩ was properly viewed as taking a ⊗4set⊗* of sets as its
argument, and forming the intersection of all those sets. Thus
∩({ {a,b,c},{c,g,h},{a,c,e} })={c}.
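The two observations can be sketched together in modern Python (Python sets standing in for AM's Sets):

```python
from functools import reduce

# Intersect-o-Intersect gives the same answer however the three arguments
# are grouped, and coalesced Intersect (x∩x) is the identity -- so ∩ is
# properly viewed as an operation on a *set* of sets.

x, y, z = {"a", "b", "c"}, {"c", "g", "h"}, {"a", "c", "e"}

assert (x & y) & z == x & (y & z)   # grouping is irrelevant
assert x & x == x                   # coalescing Intersect = identity

def n_ary_intersect(set_of_sets):
    return reduce(lambda a, b: a & b, set_of_sets)
```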

Some valuable compositions were formed. By forming x∩(y∪z) and
(x∩y)∪(x∩z) as two separate compositions, AM found their equivalence
via experimental data. This was one case where the Intuition functions
had given AM an unfair advantage, since the Venn-diagram abstract representation
for sets suggested this relationship explicitly. When the intuition was removed,
the discovery was a much more genuine one. All of de Morgan's laws were discovered
in this manner, incidentally. Several special cases were singled out, involving
empty sets, singletons, and doubletons:
e.g., if x is a singleton, then x∩(y∪z) is equal to either x∩y or to x∩z;
if both those sets are the same, then of course x∩(y∪z) is equal to their
common value; if the two sets differ, then one is empty and the other is
x, and the ultimate intersection is equal to x. Or: that intersection is
always either x or the empty set.
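The "equivalence via experimental data" can be sketched as follows (a modern Python illustration of the method, not AM's code):

```python
import random
random.seed(0)  # arbitrary seed, for reproducibility of this sketch

# The two separately built compositions are run on random sets and their
# values compared.

def lhs(x, y, z): return x & (y | z)
def rhs(x, y, z): return (x & y) | (x & z)

def random_set():
    return {random.choice("abcdefg") for _ in range(random.randint(0, 4))}

for _ in range(200):
    x, y, z = random_set(), random_set(), random_set()
    assert lhs(x, y, z) == rhs(x, y, z)

# The special case AM singled out: for singleton x, the intersection
# x∩(y∪z) is always either x itself or the empty set.
for _ in range(200):
    x = {random.choice("abcdefg")}
    y, z = random_set(), random_set()
    assert lhs(x, y, z) in (x, set())
```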

The composition Delete-o-Insert is not quite so trivial as one might think:
it takes a structure S, inserts an element e, and then removes element e.
This simple operation can be used to test the type of structure that S is:
it ⊗4never⊗* alters a Bag or a List, but it does alter Sets and Osets.
Orthogonally, Insert-o-Delete alters Lists and Osets, but never changes
Bags or Sets. The first composition tests for multiple elements, and the
second composition tests for order.
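The type test can be sketched in modern Python, with Python's set and list standing in for AM's Set and List (Bag and Oset behave analogously; the LISP-cons-style insertion is an assumption of this sketch):

```python
# Insert is taken to add at the front, and Delete to remove the first
# matching occurrence.

def delete_o_insert_list(s, e):
    t = list(s)
    t.insert(0, e)      # insert e at the front
    t.remove(e)         # delete the first occurrence -- the one just added
    return t

def delete_o_insert_set(s, e):
    t = set(s)
    t.add(e)            # a no-op if e was already present
    t.discard(e)
    return t

# Delete-o-Insert never alters a List...
assert delete_o_insert_list(["a", "a", "b"], "a") == ["a", "a", "b"]
# ...but loses an element from a Set that already contained e --
# which is exactly how it tests for multiple elements.
assert delete_o_insert_set({"a", "b"}, "a") == {"b"}
```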

Operations formed by switching the two arguments of an old operation
are marginally interesting. They help pin down the true nature of what
kinds of argument the operation should be considered as taking.
For example, Union(x,y)=Union(y,x), so Union can take an unordered collection
of sets as its argument, and form their union.
Similarly, we see that Insert(x,y) is in general quite different from Insert(y,x).
When AM tries to see what invariants exist for this operation, it can notice only that
the trivial constraint x=y is sufficient to cause the order of arguments not to
matter. If it had the concept of the LISP function "COUNT", which counts up all the
storage space used, or "FLATTEN", which removes all parentheses from a list structure,
then AM would notice that the COUNT's of both forms of Inserting are equal in number,
and that their FLATTEN's are equal sets of elements.

Looking for invariants is one favorite pastime of AM. In general, two operations
f and g will not coincide. AM wants to find a unifying operation U, such that
U(f(x))=U(g(x)); or, AM tries to find a U such that
f(U(x))=g(U(x)). This is of course the idea mathematicians normally refer to as
homomorphism.
AM calls this process Canonizing. Canonizing the two orders of Insert
(x into y, and y into x) would hopefully find the operation U=FLATTEN or
U=COUNT. Canonizing Same-length will cause the operation of Length to be synthesized.
Canonizing the notion of angles-equal-in-measure yields the operations we normally
call rigid motions in the plane.
This whole business is related to standard forms or canonical forms.

. ASSSEC(Numbers)

In the last subsection, the operation "Same-length" was synthesized, as
a generalized form of the predicate Equality-of-2-structures.
Same-length(x,y) is True iff x and y are two list structures which have
the same length, the same number of elements.
This new predicate was examined by AM, and sure enough it let many more
pairs of structures return True than Equality did, yet it didn't let too
high a percentage through: just about 10%. This made AM want to find an
invariant, a canonizing function f, which put any given list structure x into
a standard form f(x), satisfying:

⊗6Same-length(x,y) iff Equal(f(x),f(y))⊗*


That is, x and y would have the same length precisely when the standard
forms of x and y were identically equal to each other.

AM had to synthesize this function f, step by step. First it performed
some experiments, and found that Same-length didn't care what the
type of its arguments was. If a certain Set S did/didn't satisfy
Same-length(R,S), then the same result would obtain if S were replaced
by Viewing S as a list, or as a bag, or as an Oset. Thus the standard form
of a structure could be fixed as one specific type. But which should it
be (bag,  set, list, oset)? To find out, AM ran two more experiments.
The first experiment was to see whether Same-length(R,S) was affected when
the order of elements inside R were changed. This turned out to be negative.
So R might as well be an unordered structure: bag or set.
The next experiment had AM decide whether or not multiple elements inside
R would affect the value of Same-length(R,S). This turned out positive,
so multiple elements had to be taken into account. The canonical type
of argument was thus either bag or list. Together, the two experiments
indicated that the type had to be Bag. So the canonizing function f would
first convert any argument R to a bag.
A note tacked onto the Same-length concept's definition said that this
concept didn't look at the Car's or value-cells of its arguments.
That would mean that they should all be replaced by some fixed value, like T.
This checked out experimentally. So f should replace each element in the
structure R by the constant T. The final operation f synthesized was:

⊗6f(R) = Replace-each-element-by-T ( Convert-to-bag (R) )⊗*.

This operation would convert {a, (b,c,{d},e,e), q} into (T,T,T).
The set of standard forms for bags was called Canonical-bags, and renamed
by the user as Numbers. The canonizing operation f was called Count,
by the user, since it converts any structure into a "number" which represents
the length of that structure:

⊗6Same-length(R,S) iff Equal(Count(R),Count(S))⊗*
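The synthesized canonizing function can be sketched in modern Python (bags modeled as tuples and "T" as Python's True; both modelings are assumptions of this illustration):

```python
#   f(R) = Replace-each-element-by-T( Convert-to-bag(R) )

def convert_to_bag(r):
    return tuple(r)                    # forget set-ness / order

def f(r):
    return tuple(True for _ in convert_to_bag(r))   # each element becomes T

def same_length(r, s):
    return len(tuple(r)) == len(tuple(s))

# Same-length(R,S) iff Equal(f(R), f(S)); f is what the user renamed Count.
r, s = {"a", "b", "c"}, ["x", "y", "z"]
assert same_length(r, s) and f(r) == f(s)
assert f(r) == (True, True, True)      # the (T,T,T) of the trace
```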

AM now made explicit an important analogy: bags↔numbers, equal↔same-length,
bag-operations↔[those same operations, restricted to canonical-bags].
Several minutes were spent developing these restricted bag-operations.
The old operation of inserting a bag S into itself provided a cute way to 
"add 1" to
any number.
The Bag-union operation turned into what we call Addition:

⊗6Bag-union( (T T T) (T T) ) = (T T T T T)⊗* means "3+2=5", in unary.

.ONCE TURN ON "{}"

TIMES was discovered in four ways, as discussed a few pages ago, in
subsection {SECNUM}.{SSECNUM}.1.
The intersection of two "numbers" is the operation we call MINIMUM:

⊗6Intersect((T T T) (T T T T)) = (T T T)⊗* just says "Minimum(3,4)=3".
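The unary arithmetic can be sketched in modern Python (canonical bags modeled as tuples of True, an assumption of this sketch):

```python
def num(n):
    return (True,) * n                   # the canonical bag for n

def bag_union(a, b):
    return a + b                         # Bag-union = Addition, in unary

def bag_intersect(a, b):
    return a if len(a) <= len(b) else b  # Bag-intersection = Minimum

assert bag_union(num(3), num(2)) == num(5)        # "3+2=5"
assert bag_intersect(num(3), num(4)) == num(3)    # "Minimum(3,4)=3"
```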

AM likes to find inverses, and the inverse of these operations turned out
to be the ones we call subtraction, factoring, division, less-than, etc.

AM likes coalescing, and some important operations were created that way, too:
Add(x,x) is Doubling; Times(x,x) is Squaring; the inverses of those are
Halving and Square-rooting.
AM worried about which numbers could be halved or square-rooted,
and this led to the creation of the concepts Even-numbers and Perfect-squares.

AM likes composing and reversing args, 
and thereby figured out that most arithmetic operations
like Add and Times take a ⊗4bag⊗* of numbers to work on.

The inverse of TIMES was, as we said, factoring: given n, find all possible bags of numbers
whose product yielded n. The identity of Times ("1") was intentionally excluded,
since its presence in any quantity would not affect the result of Times.
AM soon asked itself about numbers with extreme or unusual factorings.

.ONCE TURN ON "{}"

Primes were found in this way, as was Goldbach's conjecture. The example in
chapter {[2]EXAM1} went into this in great detail.

<<Should the above be covered in much more detail?>


. ASSSEC(Geometry)

.ONCE TURN ON "{}"

After the preceding research ground to a halt (see Limitations discussion,
in Section {[2]EVALU}.{[2]DIFSSEC}), the author spent a few hours creating
a supplemental base of primitive concepts, this time dealing with geometry.
Below is a technical description of what AM did. A briefer summary can be
found later in this chapter, on page..., wherein the whole Geometry escapade
is described as a single experiment performed on AM.

<<Should this Geometry record sec be incorporated into Geom-Expt subsec?>

<< Fill in the detailed record of what AM did>


.ASSEC(A `Nice' Trace)

Include all these examples of AM in Action, in one big "doctored" trace
(Occasionally, give a
"snapshot" of the new jobs, concepts, facet entries, etc.):

 initial behavior of the system

 the example of discovering cardinality

 noticing a "real" number theory conjecture

 geometry example: congruence and similarity.

<<Actually write/Fill in these example traces of AM in action>

.ASSEC(An `Unadulterated' Trace)

.UNADULT: SSECNUM;

.ONCE TURN ON "{}"

Pick a section or two from the trace in Chapter {[2]EXAM1}, or from the
last subsection, and show it the way AM actually typed it.

.CH2EX1: PAGE;

.COMMENT in here goes the actual trace that led to the example in EXAM1;

.PAGE←PAGE+1;

.CH2EX2: PAGE;